---
title: "FINAL PROJECT: COVID 19 IMPACT ON COPPER"
author: 'Group Members: Blair Okorowski, Rikk Kretue, James Hanley, Terrance Randolph'
date: 'March 18, 2020'
output:
flexdashboard::flex_dashboard:
orientation: columns
vertical_layout: fill
source_code: embed
theme: lumen
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = FALSE, eval = TRUE, warning=FALSE, message=FALSE)
library(flexdashboard)
#
options(digits = 4, scipen = 999999)
library(psych)
library(tidyverse)
library(ggplot2)
library(GGally)
library(lubridate)
library(dplyr)
library(quantreg)
library(forecast)
library(tidyquant)
library(timetk)
library(quantmod)
library(matrixStats)
library(plotly)
library(quadprog)
library(shiny)
#
#stocks_env <- new.env()
symbols <- c("DHI", "MSBHY", "SCCO")
price_tbl <- tq_get(symbols) %>% select(date, symbol, price = adjusted)
#TR Code: extract Dates no later than 03/13 for analysis purposes
price_tbl_TR <- price_tbl[price_tbl$date <= '2020-03-13',]
# long format ("TIDY") price tibble for possible other work
# return_tbl <- price_tbl %>% group_by(symbol) %>% tq_transmute(mutate_fun = periodReturn, period = "daily", type = "log", col_rename = "daily_return") %>% mutate(abs_return = abs(daily_return))
#TR Code Change
return_tbl <- price_tbl_TR %>% group_by(symbol) %>% tq_transmute(mutate_fun = periodReturn, period = "daily", type = "log", col_rename = "daily_return") %>% mutate(abs_return = abs(daily_return))
#str(return_tbl)
r_2 <- return_tbl %>% select(symbol, date, daily_return) %>% spread(symbol, daily_return)
r_2 <- xts(r_2, r_2$date)[-1, ] #[r_2$date <'2020-03-13'])[-1, ]
storage.mode(r_2) <- "numeric"
r_2 <- r_2[, -1]
r_corr <- apply.monthly(r_2, FUN = cor)[,c(2, 3, 6)]
colnames(r_corr) <- c("DHI_MSBHY", "DHI_SCCO", "MSBHY_SCCO")
r_vols <- apply.monthly(r_2, FUN = colSds)
#
corr_tbl <- r_corr %>% as_tibble() %>% mutate(date = index(r_corr)) %>% gather(key = assets, value = corr, -date)
vols_tbl <- r_vols %>% as_tibble() %>% mutate(date = index(r_vols)) %>% gather(key = assets, value = vols, -date)
#
corr_vols <- merge(r_corr, r_vols)
corr_vols_tbl <- corr_vols %>% as_tibble() %>% mutate(date = index(corr_vols))
#
n <- 10000 # lots of trials, each a "day" or an "hour"
z <- rt(n, df = 30)
garch_sim_t <- function(n = 1000, df = 30, omega = 0.1, alpha = 0.8, phi = 0.05, mu = 0.01){
n <- n # lots of trials, each a "day" or an "hour"
# set.seed(seed)
z <- rt(n, df = df)
e <- z # store variates
y <- z # returns: store again in a different place
sig2 <- z^2 # create volatility series
omega <- omega #base variance
alpha <- alpha #vols Markov dependence on previous variance
phi <- phi # returns Markov dependence on previous period
mu <- mu # average return
for (t in 2:n) { # Because of lag start at second
e[t] <- sqrt(sig2[t])*z[t] # 1. e is conditional on sig
y[t] <- mu + phi*(y[t-1]-mu) + e[t] # 2. generate returns
sig2[t+1] <- omega + alpha * e[t]^2 # 3. generate new sigma^2
}
return(list(
sim_df_vbl = tibble(t = 1:n, z = z, y = y, e = e, sig = sqrt(sig2)[-(n+1)]),
sim_df_title = tibble(t = 1:n, "1. Unconditional innovations" = z, "4. Conditional returns" = y, "3. Conditional innovations" = e, "2. Conditional volatility" = sqrt(sig2)[-(n+1)])
))
}
#
# convert prices from tibble to xts
#price_etf <- price_tbl %>% spread(symbol, price)
#TR Code
price_etf <- price_tbl_TR %>% spread(symbol, price)
###
price_etf <- xts(price_etf, price_etf$date)
storage.mode(price_etf) <- "numeric" #select(DHI, MSBHY, SCCO) # 3 risk factors (rf)
price_etf <- price_etf[, -1]
price_0 <- as.numeric(tail(price_etf, 1))
shares <- c(60000, 75000, 50000)
#price_last <- price_etf[length(price_etf$DHI), 3:5] #(DHI, MSBHY, SCCO) %>% as.vector()
w <- as.numeric(shares * price_0)
return_hist <- r_2
# Fan these across the length and breadth of the risk factor series
weights_rf <- matrix(w, nrow=nrow(return_hist), ncol=ncol(return_hist), byrow=TRUE)
## We need to compute exp(x) - 1 for very small x: expm1 accomplishes this
loss_rf <- -rowSums(expm1(return_hist) * weights_rf)
loss_df <- data_frame(loss = loss_rf, distribution = rep("historical", each = length(loss_rf)))
#
ES_calc <- function(data, prob){
threshold <- quantile(data, prob)
mean(data[data > threshold]) # expected shortfall: mean loss beyond the VaR threshold
}
#
n_sim <- 1000
n_sample <- 100
prob <- 0.95
ES_sim <- replicate(n_sim, ES_calc(sample(loss_rf, n_sample, replace = TRUE), prob))
summary(ES_sim)
#
#summary(ES_sim)
#
# mean excess plot to determine thresholds for extreme event management
data <- as.vector(loss_rf) # data is purely numeric
umin <- min(data) # threshold u min
umax <- max(data) - 0.1 # threshold u max
nint <- 100 # grid length to generate mean excess plot
grid_0 <- numeric(nint) # grid store
e <- grid_0 # store mean exceedances e
upper <- grid_0 # store upper confidence interval
lower <- grid_0 # store lower confidence interval
u <- seq(umin, umax, length = nint) # threshold u grid
alpha <- 0.95 # confidence level
for (i in 1:nint) {
data <- data[data > u[i]] # subset data above thresholds
e[i] <- mean(data - u[i]) # calculate mean excess of threshold
sdev <- sqrt(var(data)) # standard deviation
n <- length(data) # sample size of subsetted data above thresholds
upper[i] <- e[i] + (qnorm((1 + alpha)/2) * sdev)/sqrt(n) # upper confidence interval
lower[i] <- e[i] - (qnorm((1 + alpha)/2) * sdev)/sqrt(n) # lower confidence interval
}
mep_df <- data.frame(threshold = u, threshold_exceedances = e, lower = lower, upper = upper)
u_mep <- 200000 # threshold suggested by the mean excess plot
loss_excess <- loss_rf[loss_rf > u_mep] - u_mep
quantInv <- function(distr, value) ecdf(distr)(value)
u_prob <- quantInv(loss_rf, u_mep) # probability that losses stay below the threshold
ES_mep <- mean(loss_rf[loss_rf > u_mep]) # expected shortfall beyond the threshold
stat_fun <- function(x, na.rm = TRUE, ...) {
library(moments)
# x = numeric vector
# na.rm = boolean, whether or not to remove NA's
# ... = additional args passed to quantile
c(mean = mean(x, na.rm = na.rm),
stdev = sd(x, na.rm = na.rm),
skewness = skewness(x, na.rm = na.rm),
kurtosis = kurtosis(x, na.rm = na.rm),
quantile(x, na.rm = na.rm, ...))
}
#
contract <- 1 # billion
working <- 0.100 # billion
sigma_wc <- 0.025 # billion
sigma <- 0.25
threshold <- -0.12 # percentage return
alpha <- 0.05 # tolerance
risky <- 0.1 # percentage return on the risky asset
riskless <- 0.02 # time value of cash -- no risk
z_star <- qnorm(alpha)
w <- (threshold - riskless) / (risky - riskless + sigma*z_star) # weight in the risky asset that meets the loss threshold at tolerance alpha
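# A quick numeric check of this safety-first weight, using the values set
# above (qnorm(0.05) is about -1.645, so this is a sketch, not an exact figure):
#   w = (-0.12 - 0.02) / (0.10 - 0.02 + 0.25 * (-1.645))
#     = -0.14 / -0.331
#     which is roughly 0.42: about 42% of the portfolio can sit in the risky
#     asset while keeping P(return < -12%) near the 5% tolerance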
#
k <- 1:20 # days in a business month
col_names <- paste0("lag_", k)
#
# remove abs_return the fourth column
return_lags <- return_tbl[, -4] %>%
tq_mutate(
select = daily_return,
mutate_fun = lag.xts,
k = k,
col_rename = col_names
)
return_autocors <- return_lags %>%
gather(key = "lag", value = "lag_value", -c(symbol, date, daily_return)) %>%
mutate(lag = str_sub(lag, start = 5) %>% as.numeric) %>%
group_by(symbol, lag) %>%
summarize(
cor = cor(x = daily_return, y = lag_value, use = "pairwise.complete.obs"),
upper_95 = 2/(n())^0.5,
lower_95 = -2/(n())^0.5
)
return_absautocors <- return_autocors %>%
ungroup() %>%
mutate(
lag = as_factor(as.character(lag)),
cor_abs = abs(cor)
) %>%
select(lag, cor_abs) %>%
group_by(lag)
#
## INPUTS: r vector
## OUTPUTS: list of scalars (mean, sd, median, skewness, kurtosis)
data_moments <- function(data){
library(moments)
library(matrixStats)
mean <- colMeans(data)
median <- colMedians(data)
sd <- colSds(data)
IQR <- colIQRs(data)
skewness <- skewness(data)
kurtosis <- kurtosis(data)
result <- data.frame(mean = mean, median = median, std_dev = sd, IQR = IQR, skewness = skewness, kurtosis = kurtosis)
return(result)
}
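# A usage sketch for data_moments (r_2 is the numeric xts of daily returns
# built above; left commented so the dashboard output is unchanged):
# moments_tbl <- data_moments(r_2)
# moments_tbl # one row per symbol: mean, median, std_dev, IQR, skewness, kurtosis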
#
#
port_sample <- function(return, n_sample = 252, stat = "mean")
{
R <- return # daily returns
n <- dim(R)[1]
N <- dim(R)[2]
R_boot <- R[sample(1:n, n_sample),] # sample returns
r_free <- 0.03 / 252 # daily
mean_vect <- apply(R_boot,2,mean)
cov_mat <- cov(R_boot)
sd_vect <- sqrt(diag(cov_mat))
A_mat <- cbind(rep(1,N),mean_vect)
mu_P <- seq(-.01,.01,length=300)
sigma_P <- mu_P
weights <- matrix(0,nrow=300,ncol=N)
for (i in 1:length(mu_P))
{
b_vec <- c(1,mu_P[i])
result <-
solve.QP(Dmat=2*cov_mat,dvec=rep(0,N),Amat=A_mat,bvec=b_vec,meq=2)
sigma_P[i] <- sqrt(result$value)
weights[i,] <- result$solution
}
sharpe <- (mu_P - r_free)/sigma_P ## compute Sharpe's ratios
ind_max <- (sharpe == max(sharpe)) ## Find maximum Sharpe's ratio
ind_min <- (sigma_P == min(sigma_P)) ## find the minimum variance portfolio
ind_eff <- (mu_P > mu_P[ind_min]) ## finally the efficient frontier
result <- switch(stat,
"mean" = mu_P[ind_max],
"sd" = sigma_P[ind_max]
)
return(result)
}
#
```
Questions
==============================================
### The business situation: Questions
Complications
- The company has experienced very high volatility due to a new strain of coronavirus that currently has no vaccine.
- Citizens in many countries have curtailed their usual work schedules.
- As a result, consumer demand is low, which is causing revenue and the stock price to fall.
- Moreover, if economic inactivity persists, how can the company evaluate risks, limit exposures, and still grow revenue and returns?
Response
==============================================
### The business situation: Response
Key CFO Questions
1. How do we characterize renewables' variability and the impact of one market on another?
- A key attribute of renewables such as solar, wind, and biodegradables is that they force not only each other, but also inter-related markets, to innovate and stay ahead of the game.
- Copper can be used in almost anything. In particular, copper has long been used in HVAC because its composition, versatility, and availability make it very attractive.
- The recent COVID-19 (coronavirus) pandemic has systematically impacted many lives and businesses.
- Value propositions and supply chains were clearly disrupted as consumers and businesses alike became unable to conduct their daily operations. Tertiary markets such as the financial and capital markets were also impacted because there was less movement and there were fewer transactions.
2. What are the best combinations of renewables drivers?
- As the government deregulates many of its stringent policies, it incentivizes and promotes entrepreneurship and innovation. Concurrently, the price must benefit the consumer.
- When new innovations like solar, wind, and ocean currents entice the capital markets to be first movers, they place a tremendous amount of pressure on established oil companies to stop and re-examine themselves holistically.
- The second combination of renewables drivers is industry peer pressure and investor peer pressure.
- These pressures are risky but necessary to disrupt the current state of the oil market.
3. How much capital is needed to support a renewables earnings stream?
- The renewable energy market cannot start and sustain itself long-term; it depends heavily on national policy and on federal directives and incentives to develop a viable long-term model.
- The amount of capital that is required to support the renewables earnings stream is dependent upon several things.
- First, what is the current state of the energy market?
- Second, what are the incentives to dramatically impact the consumption at a national level?
- Third, what is the market willing to bear and at what entry points?
- Fourth, what is the consumers’ sentiment?
- Fifth, can the global supply chain support the renewable energy market’s production and delivery of goods and service?
- Sixth, what is the state of the economy?
- Finally, is the financial market ready to support the idea?
4. How should the company plan to meet risk tolerances and thresholds for losses?
- One of the best key deliverables of mitigating risk in the renewable energy market is staying ahead of the market through research and development.
- Depending on the regional market, the research and development vision and programs have to be feasible and pragmatic.
- Unlike finite natural resources, renewable energy sources like wind, ocean currents, and solar are effectively unlimited and can provide returns for decades. The business and economic model will behave like an annuity.
- In order to mitigate many of these risks, the company should think globally.
Data Science Overview
==============================================
### Data and workflow
ETFs OF INTEREST
We have selected companies from the [commodity](https://www.investopedia.com/terms/e/etf.asp) sector for which copper is a primary material:
- (SCCO) [Southern Copper Corporation](https://finance.yahoo.com/quote/SCCO/), a mining company with operations in [Peru and Mexico, established in 1995](https://www.bloomberg.com/profile/company/0L8B:LN)
- (DHI) [D.R. Horton, Inc.](https://www.nasdaq.com/market-activity/stocks/dhi), a home construction company from [Texas, established in 2002](https://www.drhorton.com/)
- (MSBHY) [Mitsubishi Corporation](https://www.mitsubishicorp.com/jp/en/ir/spinfo/), which produces vehicles, appliances, and mini-split air-conditioning units
These funds act as indices that effectively summarize the inputs, processes, management, decisions, and outputs of various aspects of the commodity sector.
Examining and analyzing these series will go a long way toward helping the CFO understand the riskiness of these markets.
We load historical data on the three ETFs, transform prices into returns, and then further transform the returns into within-month correlations and standard deviations. Our process includes:
- Review the stylized facts of volatility.
- Relationships among three representative markets.
- Develop market risk measures for each driver of earnings.
- Apply corporate risk tolerances & thresholds.
- Determine optimal collateral positions for each driver of earnings.
- Analyze viable commodity combinations that maximize portfolio return while limiting risk.
- Convey the probable range of collateral needed to satisfy corporate risk tolerance and thresholds.
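The front of this pipeline can be sketched end to end as follows (a minimal, non-evaluated sketch that mirrors the setup chunk; the `tq_get` defaults and column names are assumptions carried over from that chunk):

```{r pipeline-sketch, eval=FALSE}
library(tidyquant)   # tq_get, tq_transmute
library(matrixStats) # colSds
symbols <- c("DHI", "MSBHY", "SCCO")
# prices -> daily log returns, one row per (symbol, date)
returns <- tq_get(symbols) %>%
  select(date, symbol, price = adjusted) %>%
  group_by(symbol) %>%
  tq_transmute(mutate_fun = periodReturn, period = "daily",
               type = "log", col_rename = "daily_return")
# wide xts of returns -> within-month correlations and volatilities
r_wide <- returns %>% spread(symbol, daily_return)
r_xts  <- xts(r_wide[, -1], order.by = r_wide$date)
r_corr <- apply.monthly(r_xts, FUN = cor)    # monthly pairwise correlations
r_vols <- apply.monthly(r_xts, FUN = colSds) # monthly standard deviations
```

The monthly correlation and volatility series produced here are the inputs to the market-spillover and risk-measure tabs below.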
Exploratory Analysis
==============================================
### Initial EDA1
```{r}
#########
#TR Code
#########
# volatility: r_vols
# correlation: r_corr
corr.date <- r_corr
vols.date <- colnames(t(r_vols))
vols.data.all <- data.frame(r_vols,"Date" = vols.date)
corr.data.all <- data.frame(r_corr,"Date" = colnames(t(r_corr)))
fig.v <- plot_ly(x = ~vols.data.all$Date)
fig.v <- fig.v %>% add_lines(y = vols.data.all$SCCO, name = "SCCO", type = 'scatter')
fig.v <- fig.v %>% add_lines(y = vols.data.all$DHI, name = "DHI", type = 'scatter')
fig.v <- fig.v %>% add_lines(y = vols.data.all$MSBHY, name = "MSBHY", type = 'scatter')%>%
layout(title = "Volatility",
xaxis = list(title = "Date"),
yaxis = list(title = "Returns"))
fig.v
```
### Initial EDA2
```{r}
#########
#TR Code
#########
fig.r <- plot_ly(x = ~corr.data.all$Date)
fig.r <- fig.r %>% add_lines(y = corr.data.all$DHI_MSBHY, name = "DHI_MSBHY", type = 'scatter')
fig.r <- fig.r %>% add_lines(y = corr.data.all$DHI_SCCO, name = "DHI_SCCO", type = 'scatter')
fig.r <- fig.r %>% add_lines(y = corr.data.all$MSBHY_SCCO, name = "MSBHY_SCCO", type = 'scatter')%>%
layout(title = "Correlation of Companies",
xaxis = list(title = "Date"),
yaxis = list(title = "Corr"))
fig.r
```
Returns
=======================================================================
column {.sidebar}
----------------------------------------------------
Summary
### Overall
Overall, there is a moderate to strong correlation among DHI, Mitsubishi, and Southern Copper.
- There is a stronger positive correlation with Mitsubishi, with discernible outliers. Depending on the specific group, most of the outliers tend to cluster positively.
### Persistence
- On average, we can expect a low return at each lag.
- We may find exceptionally large returns at lags 15, 5, and 2 consistently.
- We may occasionally see them for lags 1 and 3.
- We may rarely see them for lag 14.
### Correlation
- It is clear that copper plays an integral part in the renewables market space and that Mitsubishi is one of the key players in that market. Mitsubishi and Southern Copper had the highest correlation.
column {.tabset}
----------------------------------------------------
### Renewables
```{r moments}
return_plot <- return_tbl %>% select(date, symbol, daily_return) %>% spread(symbol, daily_return)
ggpairs(return_plot)
```
### DHI
```{r}
ggtsdisplay(return_plot$DHI, plot.type = "histogram", main = "DHI daily returns")
```
### MSBHY
```{r}
ggtsdisplay(return_plot$MSBHY, plot.type = "histogram", main = "MSBHY daily returns")
```
### SCCO
```{r}
ggtsdisplay(return_plot$SCCO, plot.type = "histogram", main = "SCCO daily returns")
```
### Return persistence
```{r plotreturnabscors}
# Tukey's fence
upper_bound <- 1.5*IQR(return_absautocors$cor_abs) %>% signif(3)
p <- return_absautocors %>%
ggplot(aes(x = fct_reorder(lag, cor_abs, .desc = TRUE) , y = cor_abs)) +
# Add boxplot
geom_boxplot(color = palette_light()[[1]]) +
# Add horizontal line at outlier break point
geom_hline(yintercept = upper_bound, color = "red") +
annotate("text", label = paste0("Outlier threshold = ", upper_bound),
x = 24.5, y = upper_bound + .03, color = "red") +
# Aesthetics
expand_limits(y = c(0, 0.5)) +
theme_tq() +
labs(
title = paste0("Absolute Autocorrelations: Lags ", rlang::expr_text(k)),
x = "Lags"
) +
theme(
legend.position = "none",
axis.text.x = element_text(angle = 45, hjust = 1)
)
ggplotly(p)
```
column {.tabset}
----------------------------------------------------
### Team Pairs
```{r}
#TR Code
ggpairs(return_tbl, columns = 2:4,ggplot2::aes(colour=symbol))
```
### Team DHI
```{r}
############################
#TR Change: DHI HISTOGRAM
############################
fit <- density(return_plot$DHI)
plot_ly(x = return_plot$DHI,
type = "histogram",
name = "DHI Daily Returns") %>%
add_trace(x = fit$x,
y = fit$y,
type = "scatter",
mode = "lines",
fill = "tozeroy",
yaxis = "y2",
name = "Density") %>%
layout(xaxis = list(title = 'DHI RETURN DENSITY'),
yaxis2 = list(overlaying = "y",
side = "right"))
```
### Team MSBHY
```{r}
############################
#TR Change: MSBHY HISTOGRAM
############################
fit <- density(return_plot$MSBHY)
plot_ly(x = return_plot$MSBHY,
type = "histogram",
name = "MSBHY Daily Returns") %>%
add_trace(x = fit$x,
y = fit$y,
type = "scatter",
mode = "lines",
fill = "tozeroy",
yaxis = "y2",
name = "Density") %>%
layout(xaxis = list(title = 'MSBHY RETURN DENSITY'),
yaxis2 = list(overlaying = "y",
side = "right"))
```
### Team SCCO
```{r}
############################
#TR Change: SCCO HISTOGRAM
############################
fit <- density(return_plot$SCCO)
plot_ly(x = return_plot$SCCO,
type = "histogram",
name = "SCCO Daily Returns") %>%
add_trace(x = fit$x,
y = fit$y,
type = "scatter",
mode = "lines",
fill = "tozeroy",
yaxis = "y2",
name = "Density") %>%
layout(xaxis = list(title = 'SCCO RETURN DENSITY'),
yaxis2 = list(overlaying = "y",
side = "right"))
```
### Combined
```{r}
############################
#TR Change: RETURNS COMBINED
############################
Bar.overlay <- plot_ly(return_plot, x = ~ date) %>%
add_trace(y = ~DHI,
opacity = 0.75,
name = 'DHI',
mode = 'markers') %>%
add_trace(y = ~SCCO,
opacity = 0.5,
name = 'SCCO',
mode = 'lines+markers') %>%
add_trace(y = ~MSBHY,
opacity = 0.5,
name = 'MSBHY',
mode = 'markers') %>%
layout(barmode = "overlay",
xaxis = list(title = 'DATE'),
yaxis = list (title = 'DAILY RETURN')); Bar.overlay
```
Volatility
====================================================
column {.sidebar}
----------------------------------------------------
### Monthly volatility
- DHI volatility was well distributed with a left skew, which conveys stability.
- However, the tail is thickening, causing the data to skew right toward high volatility and sinking returns.
- SCCO's autocorrelation of returns over time shows more lags with positive returns that rarely break the confidence bounds, and minimal negative returns.
- However, due to the current pandemic, the tide is shifting quickly toward losses, with large price swings that will thicken the tails toward higher volatility.
### Monthly Correlation
- Mitsubishi and Southern Copper seem to be highly correlated to one another outside of pandemic impacts on the market.
- After evaluating the past several years their normalized covariance in volatility via correlation averages around 60%.
- High correlations, coupled with heavily weighted holdings in those assets simultaneously, could lead to an imbalanced portfolio that is at risk of steep losses.
column {.tabset}
----------------------------------------------------
### DHI
```{r}
ggtsdisplay(r_vols$DHI, plot.type = "histogram", main = "DHI monthly volatility")
```
### SCCO
```{r}
ggtsdisplay(r_vols$SCCO, plot.type = "histogram", main = "SCCO monthly volatility")
```
### MSBHY
```{r}
ggtsdisplay(r_vols$MSBHY, plot.type = "histogram", main = "MSBHY monthly volatility")
```
column {.tabset}
----------------------------------------------------
### DHI-MSBHY
```{r}
ggtsdisplay(r_corr$DHI_MSBHY, plot.type = "histogram", main = "DHI-MSBHY monthly correlation")
```
### DHI-SCCO
```{r}
ggtsdisplay(r_corr$DHI_SCCO, plot.type = "histogram", main = "DHI-SCCO monthly correlation")
```
### MSBHY-SCCO
```{r}
ggtsdisplay(r_corr$MSBHY_SCCO, plot.type = "histogram", main = "MSBHY-SCCO monthly correlation")
```
### DHI-MSBHY market spillover
```{r rqplot-DHI-MSBHY}
p <- ggplot(corr_vols_tbl, aes(x = MSBHY, y = DHI_MSBHY)) +
geom_point() +
ggtitle("DHI-MSBHY Interaction") +
geom_quantile(quantiles = c(0.10, 0.90)) +
geom_quantile(quantiles = 0.5, linetype = "longdash") +
geom_density_2d(colour = "red")
ggplotly(p)
```
### DHI-SCCO market spillover
```{r rqplot-DHI-SCCO}
p <- ggplot(corr_vols_tbl, aes(x = SCCO, y = DHI_SCCO)) +
geom_point() +
ggtitle("DHI-SCCO Interaction") +
geom_quantile(quantiles = c(0.10, 0.90)) +
geom_quantile(quantiles = 0.5, linetype = "longdash") +
geom_density_2d(colour = "red")
ggplotly(p)
```
### MSBHY-SCCO market spillover
```{r rqplot-MSBHY-SCCO}
p <- ggplot(corr_vols_tbl, aes(x = SCCO, y = MSBHY_SCCO)) +
geom_point() +
ggtitle("MSBHY-SCCO Interaction") +
geom_quantile(quantiles = c(0.10, 0.90)) +
geom_quantile(quantiles = 0.5, linetype = "longdash") +
geom_density_2d(colour = "red")
ggplotly(p)
```
Loss
============================================
column {.sidebar}
----------------------------------------------------
Pure plays
### DHI
- Shortfall is centered around $400,000.
- Shortfall is slightly right skewed.
- Slightly platykurtic, but the closest to mesokurtic.
- The safer option by this metric.
### MSBHY
- Shortfall is centered around $400,000.
- Shortfall is relatively symmetrical.
- Roughly mesokurtic.
### SCCO
- Shortfall is centered around $400,000 to $450,000.
- Lower values of shortfall are more common.
- Shortfall is slightly right skewed.
- Roughly platykurtic.
```{r}
#sliderInput('alpha_q', label = 'dhj',
# min = '0.75', max = '0.9999', value = '0.95', step = 0.001)
```
column {.tabset}
--------------------------------------------
```{r}
# sliderInput('alpha_q', label = 'threshold probability: ',
# min = 0.75,
# max = 0.9999,
# value = 0.95,
# step = 0.001)
```
### DHI
```{r DHIloss}
#
shares <- c(-215000, 0, 0)
price_last <- c(1, 0, 0) * price_0 #(DHI, MSBHY, SCCO) %>% as.vector()
w <- as.numeric(shares * price_last)
return_hist <- r_2
# Fan these across the length and breadth of the risk factor series
weights_rf <- matrix(w, nrow=nrow(return_hist), ncol=ncol(return_hist), byrow=TRUE)
## We need to compute exp(x) - 1 for very small x: expm1 accomplishes this
loss_rf <- -rowSums(expm1(return_hist) * weights_rf)
loss_df <- data_frame(loss = loss_rf, distribution = rep("historical", each = length(loss_rf)))
#
ES_calc <- function(data, prob){
threshold <- quantile(data, prob)
mean(data[data > threshold]) # expected shortfall: mean loss beyond the VaR threshold
}
#
n_sim <- 1000
n_sample <- 100
prob <- 0.95
ES_sim <- replicate(n_sim, ES_calc(sample(loss_rf, n_sample, replace = TRUE), prob))
#
sim <- ES_sim
low <- quantile(sim, 0.025)
high <- quantile(sim, 0.975)
sim_df <- data_frame(sim = sim)
title <- "DHI: Expected Shortfall simulation"
#
# #######################
# # TR Code: Slider
# #######################
# renderPlotly({
#   alpha_tolerance <- reactive({ifelse(input$alpha_q > 1, 0.99,
#                                ifelse(input$alpha_q < 0, 0.001,
#                                       input$alpha_q))})
#
#   VaR_hist <- quantile(loss_rf, probs = alpha_tolerance())
#
#   ES_hist <- median(loss_rf[loss_rf > VaR_hist])
#   VaR_text <- paste('Value at Risk =', round(VaR_hist, 2))
#   ES_text <- paste('Expected Shortfall =', round(ES_hist, 2))
#
#   p <- ggplot(loss_df, aes(x = loss, fill = distribution)) +
#     geom_histogram(alpha = 0.08) +
#     geom_vline(aes(xintercept = VaR_hist),
#                linetype = 'dashed',
#                size = 1,
#                color = 'blue') +
#     geom_vline(aes(xintercept = ES_hist),
#                size = 1,
#                color = 'blue') +
#     annotate('text', x = VaR_hist,
#              y = 100,
#              label = VaR_text) +
#     annotate('text', x = ES_hist,
#              y = 40,
#              label = ES_text) +
#     ggtitle(ES_text)
#   ggplotly(p)
# })
######################
p <- ggplot(data = sim_df, aes(x = sim))
p <- p + geom_histogram(binwidth = 1000, aes(y = 1000*(..density..)), alpha = 0.4)
p <- p + ggtitle(title)
p <- p + geom_vline(xintercept = low, color = "red", size = 1.5 ) + geom_vline(xintercept = high, color = "red", size = 1.5)
p <- p + annotate("text", x = low, y = 0.01, label = paste("L = ", round(low, 2))) + annotate("text", x = high, y = 0.01, label = paste("U = ", round(high, 2))) + ylab("density") + xlab("expected shortfall") + theme_bw()
ggplotly(p)
```
### MSBHY
```{r MSBHYloss}
#
shares <- c(0, 284000, 0)
price_last <- c(0, 1, 0) * price_0
w <- as.numeric(shares * price_last)
return_hist <- r_2
# Fan these across the length and breadth of the risk factor series
weights_rf <- matrix(w, nrow=nrow(return_hist), ncol=ncol(return_hist), byrow=TRUE)
## We need to compute exp(x) - 1 for very small x: expm1 accomplishes this
loss_rf <- -rowSums(expm1(return_hist) * weights_rf)
loss_df <- data_frame(loss = loss_rf, distribution = rep("historical", each = length(loss_rf)))
#
ES_calc <- function(data, prob){
threshold <- quantile(data, prob)
mean(data[data > threshold]) # expected shortfall: mean loss beyond the VaR threshold
}
#
n_sim <- 1000
n_sample <- 100
prob <- 0.95
ES_sim <- replicate(n_sim, ES_calc(sample(loss_rf, n_sample, replace = TRUE), prob))
#
sim <- ES_sim
low <- quantile(sim, 0.025)
high <- quantile(sim, 0.975)
sim_df <- data_frame(sim = sim)
title <- "MSBHY: Expected Shortfall simulation"
p <- ggplot(data = sim_df, aes(x = sim))
p <- p + geom_histogram(binwidth = 1000, aes(y = 1000*(..density..)), alpha = 0.4)
p <- p + ggtitle(title)
p <- p + geom_vline(xintercept = low, color = "red", size = 1.5 ) + geom_vline(xintercept = high, color = "red", size = 1.5)
p <- p + annotate("text", x = low, y = 0.01, label = paste("L = ", round(low, 2))) + annotate("text", x = high, y = 0.01, label = paste("U = ", round(high, 2))) + ylab("density") + xlab("expected shortfall") + theme_bw()
ggplotly(p)
```
### SCCO
```{r SCCOloss}
#
shares <- c(0, 0, 12500)
price_last <- c(0, 0, 1) * price_0
w <- as.numeric(shares * price_last) # recompute the position value for SCCO
return_hist <- r_2
# Fan these across the length and breadth of the risk factor series
weights_rf <- matrix(w, nrow=nrow(return_hist), ncol=ncol(return_hist), byrow=TRUE)
## We need to compute exp(x) - 1 for very small x: expm1 accomplishes this
loss_rf <- -rowSums(expm1(return_hist) * weights_rf)
loss_df <- data_frame(loss = loss_rf, distribution = rep("historical", each = length(loss_rf)))
#
ES_calc <- function(data, prob){
threshold <- quantile(data, prob)
mean(data[data > threshold]) # expected shortfall: mean loss beyond the VaR threshold
}
#
n_sim <- 1000
n_sample <- 100
prob <- 0.95
ES_sim <- replicate(n_sim, ES_calc(sample(loss_rf, n_sample, replace = TRUE), prob))
#
sim <- ES_sim
low <- quantile(sim, 0.025)
high <- quantile(sim, 0.975)
sim_df <- data_frame(sim = sim)
title <- "SCCO: Expected Shortfall simulation"
p <- ggplot(data = sim_df, aes(x = sim))
p <- p + geom_histogram(binwidth = 1000, aes(y = 1000*(..density..)), alpha = 0.4)
p <- p + ggtitle(title)
p <- p + geom_vline(xintercept = low, color = "red", size = 1.5 ) + geom_vline(xintercept = high, color = "red", size = 1.5)
p <- p + annotate("text", x = low, y = 0.01, label = paste("L = ", round(low, 2))) + annotate("text", x = high, y = 0.01, label = paste("U = ", round(high, 2))) + ylab("density") + xlab("expected shortfall") + theme_bw()
ggplotly(p)
```
Allocation
============================================
column {.sidebar}
--------------------------------------------
```{r eff-frontier-calc}
R <- r_2 # daily returns
n <- dim(R)[1]
N <- dim(R)[2]
R_boot <- R[sample(1:n, 252),] # sample returns
r_free <- 0.03 / 252 # daily
mean_vect <- apply(R_boot,2,mean)
cov_mat <- cov(R_boot)
sd_vect <- sqrt(diag(cov_mat))
A_mat <- cbind(rep(1,N),mean_vect)
mu_P <- seq(-.01,.01,length=300)
sigma_P <- mu_P
weights <- matrix(0,nrow=300,ncol=N)
for (i in 1:length(mu_P))
{
b_vec <- c(1,mu_P[i])
result <-
solve.QP(Dmat=2*cov_mat,dvec=rep(0,N),Amat=A_mat,bvec=b_vec,meq=2)
sigma_P[i] <- sqrt(result$value)
weights[i,] <- result$solution
}
# make a data frame of the mean and standard deviation results
sigma_mu_df <- data_frame(sigma_P = sigma_P, mu_P = mu_P)
names_R <- c("DHI", "MSBHY", "SCCO")
# sharpe ratio and minimum variance portfolio analysis
sharpe <- (mu_P - r_free)/sigma_P ## compute Sharpe's ratios
ind_max <- (sharpe == max(sharpe)) ## Find maximum Sharpe's ratio
ind_min <- (sigma_P == min(sigma_P)) ## find the minimum variance portfolio
ind_eff <- (mu_P > mu_P[ind_min]) ## finally the efficient frontier
w_max <- weights[ind_max,]
w_min <- weights[ind_min,]
value <- 1000000
```
```{r optimal-weights-original-text}
# We use these maximum Sharpe Ratio weights to form a risky asset loss series:
#
# DHI: `r round(w_max[1] * 100, 2)`%, MSBHY: `r round(w_max[2] * 100, 2)`%, SCCO: `r round(w_max[3] * 100, 2)`% of total portfolio value of USD `r value` with a daily return of `r round(mu_P[ind_max]*100, 2)`% and standard deviation of `r round(sigma_P[ind_max]*100, 2)`%
#
# These weights will produce a minimum variance portfolio:
#
# DHI: `r round(w_min[1] * 100, 2)`%, MSBHY: `r round(w_min[2] * 100, 2)`%, SCCO: `r round(w_min[3] * 100, 2)`% of total portfolio value of USD `r value` with a daily portfolio return of `r round(mu_P[ind_min]*100, 2)`% and standard deviation of `r round(sigma_P[ind_min]*100, 2)`%
```
### Optimal weights
We use these maximum Sharpe Ratio weights to form a risky asset loss series:
DHI: -1654.63%, MSBHY: -265.16%, SCCO: 2019.79% of total portfolio value of USD 1000000 with a daily return of 1% and standard deviation of 49.83%
These weights will produce a minimum variance portfolio:
DHI: 11.07%, MSBHY: 79.77%, SCCO: 9.16% of total portfolio value of USD 1000000 with a daily portfolio return of -0.04% and standard deviation of 1.26%
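As a quick sanity check, the minimum-variance weights above translate into dollar positions as follows (a sketch using `w_min` and `value` from the `eff-frontier-calc` chunk; left unevaluated because the bootstrap sample changes the weights on each render):

```{r min-var-dollars, eval=FALSE}
dollar_alloc <- round(w_min * value, 0) # weights times USD 1,000,000
names(dollar_alloc) <- c("DHI", "MSBHY", "SCCO")
dollar_alloc # e.g. weights of 11.07%, 79.77%, 9.16% imply roughly
             # USD 110,700, 797,700, and 91,600 respectively
```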
### Thresholds
Efficient frontier
- We have many viable portfolios along our efficient frontier.
- We must take on a great deal of risk before we can maximize portfolio return.
### Collateral concerns
How much collateral do we need to meet company risk tolerances and thresholds?
Sharpe ratio
- Attaining a higher Sharpe ratio is possible, but unlikely/difficult.
- You take on more total risk to achieve a better Sharpe ratio.
column {.tabset}
--------------------------------------------
### Efficient frontier
*Image of plot replaced due to resampling of data.*
```{r eff-frontier}
###############################################################
#TR Code: Added the image because sample changed plot each time
###############################################################
# col_P <- ifelse(mu_P > mu_P[ind_min], "blue", "grey") # discriminate efficient and inefficient portfolios
# sigma_mu_df$col_P <- col_P
# p <- ggplot(sigma_mu_df, aes(x = sigma_P, y = mu_P, group = 1))
# p <- p + geom_line(aes(colour=col_P, group = col_P), size = 1.1) + scale_colour_identity()
# p <- p + geom_abline(intercept = r_free, slope = (mu_P[ind_max]-r_free)/sigma_P[ind_max], color = "red", size = 1.1)
# p <- p + geom_point(aes(x = sigma_P[ind_max], y = mu_P[ind_max]), color = "green", size = 4)
# p <- p + geom_point(aes(x = sigma_P[ind_min], y = mu_P[ind_min]), color = "red", size = 4) ## show min var portfolio
# #p
# ggplotly(p)
```
### Sharpe mean CI

```{r sampledmean-ex}
# port_mean <- replicate(1000, port_sample(R, n_sample = 252, stat = "mean"))
# sim <- port_mean * 252
# low <- quantile(sim, 0.025)
# high <- quantile(sim, 0.975)
# sim_df <- data_frame(sim = sim)
# title <- "Tangency portfolio sampled mean simulation"
# p <- ggplot(data = sim_df, aes(x = sim))
# p <- p + geom_histogram(alpha = 0.7)
# p <- p + ggtitle(title)
# p <- p + geom_vline(xintercept = low, color = "red", size = 1.5 ) + geom_vline(xintercept = high, color = "red", size = 1.5)
# p <- p + annotate("text", x = low + 0.01, y = 200, label = paste("L = ", round(low, 2))) + annotate("text", x = high, y = 200, label = paste("U = ", round(high, 2))) + ylab("density") + xlab("daily mean: max Sharpe Ratio") + theme_bw()
# ggplotly(p)
```
### Sharpe standard deviation CI

```{r sampledsd-ex}
# port_mean <- replicate(1000, port_sample(R, n_sample = 252, stat = "sd"))
# sim <- port_mean * 252
# low <- quantile(sim, 0.025)
# high <- quantile(sim, 0.975)
# sim_df <- data_frame(sim = sim)
# title <- "Tangency portfolio sampled standard deviation simulation"
# p <- ggplot(data = sim_df, aes(x = sim))
# p <- p + geom_histogram(alpha = 0.7)
# p <- p + ggtitle(title)
# p <- p + geom_vline(xintercept = low, color = "red", size = 1.5 ) + geom_vline(xintercept = high, color = "red", size = 1.5)
# p <- p + annotate("text", x = low + 0.1, y = 200, label = paste("L = ", round(low, 2))) + annotate("text", x = high, y = 200, label = paste("U = ", round(high, 2))) + ylab("density") + xlab("daily mean: max Sharpe Ratio") + theme_bw()
# ggplotly(p)
```
### Max Sharpe Ratio loss thresholds

```{r mepcalc}
# price_last <- price_0
# value <- 1000000 # portfolio value
# w_0 <- w_max # weights -- e.g., min variance or max Sharpe
# shares <- value * (w_0/price_last)
# w <- as.numeric(shares * price_last)
# return_hist <- r_2
# # Fan these across the length and breadth of the risk factor series
# weights_rf <- matrix(w, nrow=nrow(return_hist), ncol=ncol(return_hist), byrow=TRUE)
# ## We need to compute exp(x) - 1 for very small x: expm1 accomplishes this
# loss_rf <- -rowSums(expm1(return_hist) * weights_rf)
# loss_df <- data_frame(loss = loss_rf, distribution = rep("historical", each = length(loss_rf)))
# #
# data <- as.vector(loss_rf[loss_rf > 0]) # data is purely numeric
# umin <- min(data) # threshold u min
# umax <- max(data) - 0.1 # threshold u max
# nint <- 100 # grid length to generate mean excess plot
# grid_0 <- numeric(nint) # grid store
# e <- grid_0 # store mean exceedances e
# upper <- grid_0 # store upper confidence interval
# lower <- grid_0 # store lower confidence interval
# u <- seq(umin, umax, length = nint) # threshold u grid
# alpha <- 0.95 # confidence level
# for (i in 1:nint) {
# data <- data[data > u[i]] # subset data above thresholds
# e[i] <- mean(data - u[i]) # calculate mean excess of threshold
# sdev <- sqrt(var(data)) # standard deviation
# n <- length(data) # sample size of subsetted data above thresholds
# upper[i] <- e[i] + (qnorm((1 + alpha)/2) * sdev)/sqrt(n) # upper confidence interval
# lower[i] <- e[i] - (qnorm((1 + alpha)/2) * sdev)/sqrt(n) # lower confidence interval
# }
# mep_df <- data.frame(threshold = u, threshold_exceedances = e, lower = lower, upper = upper)
```
```{r loss-mep}
# Voila the plot => you may need to tweak these limits!
# p <- ggplot(mep_df, aes( x= threshold, y = threshold_exceedances)) + geom_line() + geom_line(aes(x = threshold, y = lower), colour = "red") + geom_line(aes(x = threshold, y = upper), colour = "red") + annotate("text", x = mean(mep_df$threshold), y = max(mep_df$upper)+100, label = "upper 95%") + annotate("text", x = mean(mep_df$threshold), y = min(mep_df$lower) - 100, label = "lower 5%") + ggtitle("Mean Excess Plot: maximum Sharpe Ratio portfolio") + ylab("threshold exceedances")
# ggplotly(p)
```
### Risky capital

```{r loss-capital}
# n_sim <- 1000
# n_sample <- 100
# prob <- 0.95
# ES_sim <- replicate(n_sim, ES_calc(sample(loss_rf, n_sample, replace = TRUE), prob))
# #
# sim <- ES_sim
# low <- quantile(sim, 0.025)
# high <- quantile(sim, 0.975)
# sim_df <- data_frame(sim = sim)
# title <- paste0("Loss Capital Simulation: alpha = ", alpha*100, "% bounds")
# p <- ggplot(data = sim_df, aes(x = sim))
# p <- p + geom_histogram(alpha = 0.4)
# p <- p + ggtitle(title)
# p <- p + geom_vline(xintercept = low, color = "red", size = 1.5 ) + geom_vline(xintercept = high, color = "red", size = 1.5)
# p <- p + annotate("text", x = low, y = 100, label = paste("L = ", round(low, 2))) + annotate("text", x = high, y = 100, label = paste("U = ", round(high, 2))) + ylab("density") + xlab("expected shortfall") + theme_bw()
# ggplotly(p)
```
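The `ES_calc` helper called in the simulation above is defined elsewhere in the project; a minimal definition consistent with how it is called here (historical expected shortfall: the average loss beyond the `prob`-level value at risk) might look like this sketch — an assumption, not the project's actual code:

```r
# Assumed minimal definition of ES_calc, consistent with its call above:
# historical expected shortfall = average loss beyond the prob-level VaR.
# The project's actual ES_calc may differ.
ES_calc <- function(data, prob) {
  var_u <- quantile(data, prob)   # value at risk at level prob
  mean(data[data > var_u])        # mean of losses exceeding VaR
}

# Example on simulated loss data
set.seed(1)
losses <- rnorm(1000, mean = 0, sd = 1000)
ES_calc(losses, 0.95)
```

Bootstrapping `ES_calc` over resampled losses, as in the chunk above, then yields a confidence band for the risk capital estimate.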
### Collateral

```{r collateral}
# options(digits = 2, scipen = 99999)
# #
# r_f <- 0.03 # per annum
# mu <- mu_P[ind_max] * 252 # pull mu and sigma for tangency portfolio
# sigma <- sigma_P[ind_max] * sqrt(252)
# #
# sigma_p <- seq(0, sigma + 0.25, length.out = 100)
# mu_p <- r_f + (mu - r_f)*sigma_p/sigma
# w <- sigma_p / sigma
# threshold <- -0.12
# alpha <- 0.05
# z_star <- qnorm(alpha)
# w_star <- (threshold-r_f) / (mu - r_f + sigma*z_star)
# sim_df <- data_frame(sigma_p = sigma_p, mu_p = mu_p, w = w)
# #
# label_42 <- paste(alpha*100, "% alpha, ", threshold*100, "% threshold, \n", round(w_star*100, 2), "% risky asset", sep = "")
# label_0 <- paste(0*100, "% risky asset", sep = "")
# label_100 <- paste(1.00*100, "% risky asset", sep = "")
# p <- ggplot(sim_df, aes(x = sigma_p, y = mu_p)) +
# geom_line(color = "blue", size = 1.1) +
# geom_point(aes(x = 0.0 * sigma, y = r_f + (mu-r_f)*0.0), color = "red", size = 3.0) +
# annotate("text", x = 0.2 * sigma, y = r_f + (mu-r_f)*0.0 + 0.01, label = label_0) +
# geom_point(aes(x = w_star * sigma, y = r_f + (mu-r_f)*w_star), shape = 21, color = "red", fill = "white", size = 4, stroke = 4) +
# annotate("text", x = w_star * sigma + .2, y = r_f + (mu-r_f)*w_star + 0.1, label = label_42) +
# geom_point(aes(x = 1.0 * sigma, y = r_f + (mu-r_f)*1.00), color = "red", size = 3.0) +
# annotate("text", x = 1.0 * sigma, y = r_f + (mu-r_f)*1.00 + 0.01, label = label_100) +
# xlab("standard deviation of portfolio return") +
# ylab("mean of portfolio return") +
# ggtitle("Risk-return tradeoff of cash and risky asset")
# ggplotly(p)
```
### Initial Impact Efficient Frontier (3/11/20)

### Initial Impact Collateral (3/11/20)

Summary
============================================
column {.sidebar}
--------------------------------------------
### DHI
- Dec. 12, 2019: [Earnings per share increased 13% and revenues grew 9.4% from a year ago; considered a strong buy!](https://www.nasdaq.com/articles/d.r.-horton-dhi-looks-strong-going-into-2020%3A-heres-why-2019-12-12)
- In one month, [the price dropped from $61.88 (Feb. 19) to $28.78 (March 19), a 53% drop!](https://nysestockalerts.com/2020/03/18/investors-jump-off-the-fence-d-r-horton-inc-nyse-dhi/)
### MSBHY
- Feb. 5, 2020 [“In the short term, any impact on our business in China will be limited as our business in China is relatively small,” CFO of Mitsubishi Kazuyuki Masu](https://www.reuters.com/article/us-mitsubishi-results/trading-house-mitsubishi-voices-concern-over-coronavirus-impact-idUSKBN1ZZ0KR)
- Mitsubishi saw a 15% drop in net profits due to lessening crude oil imports in Singapore (one of their larger markets).
- The outlier in the expected shortfall section is largely attributable to the sliding oil trade.
### SCCO
- Feb. 26, 2020: Southern Copper was hopeful that the outbreak would be contained in a timely manner: “…we hope that the outbreak, if finally contained, will have a mild impact on the global economy ending in a v-shaped recovery in copper demand.” [(bnamericas)](https://www.bnamericas.com/en/news/southern-copper-shipments-supplies-unaffected-by-covid-19)
- March 17, 2020: Due to the unsanitary nature of mining in their [main hubs of Peru and Mexico, all miners were relieved of work duties](https://im-mining.com/2020/03/17/covid-19-slows-work-slows-globally-important-copper-projects-quellaveco-oyu-tolgoi/), along with slowing demand for copper imports as homebuilder demand (e.g., DHI) slows.
column {.tabset}
--------------------------------------------
### Larry: DHI
D.R. HORTON (DHI)

What was the market strategy and how has it changed?

- D.R. Horton's market strategy anticipated growth among first-time home buyers after the 2009 housing crisis.
- Demand was expected to recover through the mid-2020s.
- [D.R. Horton created “Express Homes”](http://analysisreport.morningstar.com/stock/research/c-report?&t=XNYS:DHI&region=usa&culture=zh-CN&productcode=QS&cur=&urlCookie=8056723522&e=eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkExMjhHQ00ifQ.jRbbMI400Bb9ZooVDzRwlWTjZLVQK8Wuoe4MTNuK_HRhEGWQC7iTj6RNX_QkUhRaIakTpCwLhioYtggMawqPo_MQQU9oqP5dJxQh_4aQBR9MdISHGljBgRaItM-6xRSIfSxa5GgnDmcwLALZe1tkPz0YI5rXlvXPDOflJOXHEuY.yO-n-CczhDqdbPMI.MPLk1bS9rMP369DrL5Q-RcyIIBeMhHp5svZmp0VIsOjaG0J9Sjk53Uq9PrFqk883utblUw2UqRXSn0ORFuXLSfQcS7fD1y1Y9a7VlubYNWqVfoBlbRZNxjaGidgk62ToWSvSkVztsyLfwQMbaBp1_G3ITrQ-Ud0GTNbK7oCzTZ7IMHwcvonZRt5gnVFRcSpe1EFT7Z3oZt3GtfwoNrI-mq8faPGpqForBnxtrDo.1-3ri8thKyjDFOjF2CIn2Q) to reach first-time home buyers, who tend to be more cost conscious. [They had first-mover advantage with this product](http://investor.drhorton.com/overview/strategy).
- Over time, the strategy has changed due to industry cyclicality, capital intensity, and competition.
- During the crisis, the strategy shifted to maintaining a [strong cash balance and overall liquidity position](https://www.zacks.com/stock/news/812539/dr-horton-dhi-gains-but-lags-market-what-you-should-know), controlling the level of debt, and offering additional products.

What are the plans moving forward?

- DHI had anticipated gains in the upcoming quarterly release.
- With the stock market volatile due to COVID-19, we are seeing great losses.
- The plan moving forward is to continue to maintain a strong cash balance and offer new products during a time with fewer home buyers.
### Moe: MSBHY
MITSUBISHI (MSBHY)

What was the market strategy and how has it changed?

- The previous market strategy focused on [cash flow, rebalancing of “resources”, and shareholder returns](https://www.mitsubishicorp.com/jp/en/pr/archive/2016/files/0000030155_file1.pdf).
- It still focuses on organizational resources, but with more emphasis on managing investments and building upon those portfolios.

What are the plans moving forward?

- Market strategy is set through a three-year plan that focuses on multidimensional growth.
- Strengthening operations in the services sector and downstream business will help stimulate growth.
- Going into a potential recession due to COVID-19, the strategy will need to shift toward [products and investments that will be more affordable for a stable portfolio](https://www.mitsubishicorp.com/jp/en/about/plan/).
### Curly: SCCO
Southern Copper Corporation (SCCO)

What was the market strategy and how has it changed?

- Southern Copper's market strategy focused on enhancing existing and new projects for revenue gain, cost reduction, and increased copper production.
- Over the decades, the cost of copper has been volatile, depending on supply and demand.
- As expected, there can be supply disruptions from natural disasters, the economy, and other world events.
- [New emergent technology has also played a role in copper's supply](http://www.southerncoppercorp.com/ESP/relinv//INFDLPresentaciones/pp150508.pdf). The market strategy will continue to change as the landscape for products changes.

What are the plans moving forward?

- Moving forward, Southern Copper will need to continue to penetrate the markets with the most need.
- For example, there will be a lot of change in medical industry demand.
- We'll likely see a new course of action in the coming months due to COVID-19.
- Will companies still purchase copper while financials are down?
- Will medical supplies containing copper, such as IV poles, be in higher demand?
- Copper is imported from a few key locations, including China; how will the outbreak affect these imports?
- How will the relationship between medical supplies and copper be affected?
- Copper manufacturing took a huge plunge during the outbreak. [It is reported that copper will see a decrease in price for now, but will stabilize in the long haul](https://www.zacks.com/stock/news/807799/coronavirus-fears-oil-price-plunge-send-copper-to-3year-low?cid=CS-MKTWTCH-HL-analyst_blog|industry_focus-807799).
Bing COVID 19 Map
============================================
### Map